How do predators respond to prey density?
- Holling’s Type II (Glutton): Eats fast, gets full quickly. \[ \frac{dN}{dt} = -\frac{aN}{1+aT_{h}N} \]
- Holling’s Type III (Learner): Slow start (learning), accelerates, then gets full. \[ \frac{dN}{dt} = -\frac{aN^2}{1+aT_{h}N^2} \]
where N = prey density, a = attack rate, \(T_{h}\) = handling time
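The two functional responses can be sketched directly from the formulas above; the parameter values here are illustrative, not taken from the study:

```python
def type_ii(N, a, Th):
    """Holling Type II consumption rate: rises quickly, saturates at 1/Th."""
    return a * N / (1 + a * Th * N)

def type_iii(N, a, Th):
    """Holling Type III consumption rate: sigmoidal start, same plateau 1/Th."""
    return a * N**2 / (1 + a * Th * N**2)

a, Th = 0.5, 0.1  # illustrative attack rate and handling time
for N in (0.5, 2.0, 50.0, 1e4):
    print(N, type_ii(N, a, Th), type_iii(N, a, Th))
```

At small N the Type III curve lies below the Type II curve (the "slow start"), while at large N both approach 1/\(T_h\) = 10, which is exactly why the models become hard to distinguish.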
Visualizing the Challenge
The Issue: Although the models differ at low prey density, both saturate at the same plateau (1/\(T_h\)), so their predictions look nearly identical at high prey densities.
Study Background
Before Starting Experiment:
Select a true model; this is the distribution from which the number of prey consumed will be sampled.
Select true values for the parameters
Select the number of iterations (I = 25), particles (N = 500), and time (t = 24 hrs)
Sequential Bayesian Framework
Start with a design point, which specifies the conditions of a single experiment.
Draw parameter samples (particles) from the prior distributions
Simulate data under each model at that selected design point
Compute likelihoods for each model given the simulated data.
Update particle weights, and resample if the effective sample size (ESS) falls below the threshold N/2. ESS is a measure of the efficiency of a particle set.
Update model probabilities and repeat
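The steps above can be sketched as a single SMC update for one parameter (the attack rate a) under one model. The prior, the simplified binomial likelihood, and the consumption-probability function below are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

Npart, N0, t = 500, 30, 24.0   # particles, initial prey count, hours
a_true, Th = 0.05, 0.1         # assumed "true" attack rate; fixed handling time

def p_consumed(a):
    """Per-prey probability of being eaten over time t under a Type II response."""
    rate = a * N0 / (1 + a * Th * N0)
    return 1.0 - np.exp(-rate * t / N0)

# Steps 1-2: the design point is implicit (N0, t); draw particles from a uniform prior.
particles = rng.uniform(0.0, 0.2, size=Npart)
weights = np.full(Npart, 1.0 / Npart)

# Step 3: simulate data under the model at the true parameter value.
y = rng.binomial(N0, p_consumed(a_true))

# Steps 4-5: compute binomial likelihoods, update weights, resample if ESS < N/2.
p = p_consumed(particles)
lik = comb(N0, y) * p**y * (1.0 - p)**(N0 - y)
weights *= lik
weights /= weights.sum()
ess = 1.0 / np.sum(weights**2)
if ess < Npart / 2:
    idx = rng.choice(Npart, size=Npart, p=weights)
    particles = particles[idx]
    weights = np.full(Npart, 1.0 / Npart)
```

In the full framework this update would run inside a loop over design points, with one particle set per candidate model, and model probabilities updated from each model's marginal likelihood.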
Sensitivity Analysis of Particle Count in SMC
Goal: Evaluate the computational efficiency of the author's original choice of N = 500 by analyzing the cost-benefit trade-off.
Sequential Monte Carlo methods approximate the parameter posterior distributions using particles.
SMC requires a sufficient number of particles (N) to keep the Monte Carlo variance small without incurring excessive computational cost (runtime).
Experiment Set Up
Computed marginal posterior distributions for:
- Binomial and beta-binomial models
- Type II and Type III functional response models
Kept design strategy constant at R = 0 (random design) to isolate effect of particle count N from the sequential design choices.
Tested four discrete particle counts:
\[N \in \{30, 100, 500 \text{ (Author's Choice)}, 1000\}\]
- Due to high computational costs, we performed only a single-run sensitivity analysis, comparing runtime against posterior smoothness.
Qualitative Results: N = 30 versus N = 500
N = 30
N = 500
Estimates for N = 30 are highly unreliable, with multi-peaked posteriors.
Qualitative Results: N = 100 versus N = 500
N = 100
N = 500
N = 100 shows improved smoothness, though small bumps remain in the tails.
Qualitative Results: N = 1000 versus N = 500
N = 1000
N = 500
N = 1000 has the smoothest distributions, as expected: by the Law of Large Numbers, the Monte Carlo variance decreases as N increases. We can be confident that N = 1000 and N = 500 are the statistically stable options, but computational cost must also be taken into account.
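The 1/\(\sqrt{N}\) scaling behind this claim can be demonstrated on a toy problem (estimating the mean of a standard normal rather than the study's posterior):

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_std(n_particles, reps=2000):
    """Empirical std of a Monte Carlo mean estimate built from n_particles draws."""
    estimates = rng.standard_normal((reps, n_particles)).mean(axis=1)
    return estimates.std()

for n in (30, 100, 500, 1000):
    print(f"N = {n:4d}  MC std ~ {mc_std(n):.4f}  (theory 1/sqrt(N) = {n**-0.5:.4f})")
```

The empirical spread tracks 1/\(\sqrt{N}\): moving from N = 30 to N = 1000 cuts the Monte Carlo error by roughly a factor of 6, while the cost grows linearly in N, which is the trade-off the sensitivity analysis weighs.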